
    How Serious are Methodological Issues in Surveys? A Reexamination of the Clarence Thomas Polls

    Opinion polling procedures allow for reasonable inferences about attitude changes. We examined this contention using surveys about the nomination of Clarence Thomas. In this situation, prior theory allowed us to predict the direction of changes, surveys had been conducted by a number of organizations, and substantial information was available about the methodology used in the surveys. As a result, we concluded that the deteriorating opinions of Thomas were real.
    Keywords: surveys, methodology

    Contributors to the Fall Issue/Notes

    Notes by John F. Bodle, E. A. Steffen, Edward G. Coleman, Francis W. Collopy, John L. Globensky, Lenton G. Sculthorp, John H. O'Hara, John B. Palmer, and John G. Smith

    Recent Decisions

    Comments on recent decisions by Francis W. Collopy, William J. Verdonk, Vincent C. A. Scully, John F. Mendoza, James L. O'Brien, Bernard L. Weddel, William B. Wombacher, Thomas L. Smith, Henry M. Shine, Jr., James D. Matthews, Clifford A. Goodrich, Jr., Wilmer L. McLaughlin, and William M. Dickson

    Rule-Based Forecasting: Using Judgment in Time-Series Extrapolation

    Rule-Based Forecasting (RBF) is an expert system that uses judgment to develop and apply rules for combining extrapolations. The judgment comes from two sources, forecasting expertise and domain knowledge. Forecasting expertise is based on more than a half century of research. Domain knowledge is obtained in a structured way; one example of domain knowledge is managers' expectations about trends, which we call "causal forces." Time series are described in terms of 28 conditions, which are used to assign weights to extrapolations. Empirical results on multiple sets of time series show that RBF produces more accurate forecasts than those from traditional extrapolation methods or equal-weights combined extrapolations. RBF is most useful when it is based on good domain knowledge, the domain knowledge is important, the series is well behaved (such that patterns can be identified), there is a strong trend in the data, and the forecast horizon is long. Under ideal conditions, the errors for RBF's forecasts were one-third less than those for equal-weights combining. When these conditions are absent, RBF neither improves nor harms forecast accuracy. Some of RBF's rules can be used with traditional extrapolation procedures. In a series of studies, rules based on causal forces improved the selection of forecasting methods, the structuring of time series, and the assessment of prediction intervals.
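The core mechanism described above, assigning unequal weights to component extrapolations instead of combining them equally, can be sketched as follows. This is a minimal illustration, not the actual RBF rule base: the forecast values and weights are assumed for the example, and the "rule" that shifts weight toward the trending method stands in for the system's condition-based rules.

```python
def combine_forecasts(forecasts, weights):
    """Weighted combination of component extrapolation forecasts."""
    assert len(forecasts) == len(weights)
    total = sum(weights)
    return sum(f * w for f, w in zip(forecasts, weights)) / total

# Three hypothetical extrapolations for the same horizon:
forecasts = [102.0, 98.0, 110.0]

# Equal-weights combining, the benchmark RBF is compared against:
equal = combine_forecasts(forecasts, [1, 1, 1])

# Rule-adjusted weights: an assumed rule shifts weight toward the
# trending method when causal forces support the trend:
rule_based = combine_forecasts(forecasts, [0.2, 0.2, 0.6])
```

In the full system, the weights would be set by rules keyed to the 28 series conditions rather than fixed by hand.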

    Extrapolation for Time-Series and Cross-Sectional Data

    Extrapolation methods are reliable, objective, inexpensive, quick, and easily automated. As a result, they are widely used, especially for inventory and production forecasts, for operational planning for up to two years ahead, and for long-term forecasts in some situations, such as population forecasting. This paper provides principles for selecting and preparing data, making seasonal adjustments, extrapolating, assessing uncertainty, and identifying when to use extrapolation. The principles are based on received wisdom (i.e., experts' commonly held opinions) and on empirical studies. Some of the more important principles are:
    • In selecting and preparing data, use all relevant data and adjust the data for important events that occurred in the past.
    • Make seasonal adjustments only when seasonal effects are expected and only if there is good evidence by which to measure them.
    • In extrapolating, use simple functional forms. Weight the most recent data heavily if there are small measurement errors, stable series, and short forecast horizons. Domain knowledge and forecasting expertise can help to select effective extrapolation procedures. When there is uncertainty, be conservative in forecasting trends. Update extrapolation models as new data are received.
    • To assess uncertainty, make empirical estimates to establish prediction intervals.
    • Use pure extrapolation when many forecasts are required, little is known about the situation, the situation is stable, and expert forecasts might be biased.
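One of the principles above, weight the most recent data heavily under stable conditions, is what simple exponential smoothing does. The sketch below is an illustrative assumption: the series and the smoothing parameter alpha are invented for the example, and a higher alpha simply places more weight on recent observations.

```python
def ses_forecast(series, alpha=0.5):
    """Simple exponential smoothing: the one-step-ahead forecast is a
    running level in which recent observations get geometrically more
    weight as alpha increases."""
    level = series[0]
    for y in series[1:]:
        level = alpha * y + (1 - alpha) * level
    return level

# A short, stable hypothetical series; alpha=0.5 weights recent data heavily.
forecast = ses_forecast([10.0, 12.0, 11.0, 13.0], alpha=0.5)
```

Note how this also illustrates "use simple functional forms": the model has a single parameter and no explicit trend, which is conservative when the trend is uncertain.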

    Selecting and Ranking Time Series Models Using the NOEMON Approach

    In this work, we proposed to use the NOEMON approach to rank and select time series models. Given a time series, the NOEMON approach provides a ranking of the candidate models to forecast that series, by combining the outputs of different learners. The best-ranked models are then returned as the selected ones. In order to evaluate the proposed solution, we implemented a prototype that used MLP neural networks as the learners. Our experiments using this prototype revealed encouraging results.

    Evaluating Forecasting Methods

    Ideally, forecasting methods should be evaluated in the situations for which they will be used. Underlying the evaluation procedure is the need to test methods against reasonable alternatives. Evaluation consists of four steps: testing assumptions, testing data and methods, replicating outputs, and assessing outputs. Most principles for testing forecasting methods are based on commonly accepted methodological procedures, such as prespecifying criteria or obtaining a large sample of forecast errors. However, forecasters often violate such principles, even in academic studies. Some principles might be surprising, such as: do not use R-square, do not use Mean Square Error, and do not use the within-sample fit of the model to select the most accurate time-series model. A checklist of 32 principles is provided to help in systematically evaluating forecasting methods.
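The principle above, judging methods on out-of-sample forecast errors rather than within-sample fit, can be sketched as follows. The holdout values and the naive forecasts are illustrative assumptions; mean absolute error is used here simply as one out-of-sample criterion consistent with the advice against Mean Square Error.

```python
def mae(actuals, forecasts):
    """Mean absolute error over a set of out-of-sample forecasts."""
    assert len(actuals) == len(forecasts)
    return sum(abs(a - f) for a, f in zip(actuals, forecasts)) / len(actuals)

# Hypothetical holdout period, never seen when fitting the model:
holdout = [100.0, 104.0, 101.0]

# A naive benchmark: repeat the last observed in-sample value.
naive_forecasts = [98.0, 98.0, 98.0]

naive_mae = mae(holdout, naive_forecasts)
```

Comparing each candidate method's holdout MAE against such a naive benchmark is one way to "test methods against reasonable alternatives" with prespecified criteria.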